Simple Kinesthetic Haptics for Object Recognition
Object recognition is an essential capability when performing various tasks.
Humans naturally use visual and tactile perception, separately or together, to
extract an object's class and properties. Typical approaches for robots, however, require
complex visual systems or multiple high-density tactile sensors which can be
highly expensive. In addition, they usually require actual collection of a
large dataset from real objects through direct interaction. In this paper, we
propose a kinesthetic-based object recognition method that can be performed
with any multi-fingered robotic hand in which the kinematics is known. The
method does not require tactile sensors and is based on observing grasps of the
objects. We utilize a unique and frame invariant parameterization of grasps to
learn instances of object shapes. To train a classifier, training data is
generated rapidly and solely in a computational process without interaction
with real objects. We then propose and compare two iterative algorithms
that can integrate any trained classifier. The classifiers and algorithms are
independent of any particular robot hand and can therefore be applied to
various hands. We show in experiments that, with a few grasps, the algorithms
achieve accurate classification. Furthermore, we show that the object
recognition approach is scalable to objects of various sizes. Similarly, a
global classifier is trained to identify general geometries (e.g., an ellipsoid
or a box) rather than particular ones and demonstrated on a large set of
objects. Full scale experiments and analysis are provided to show the
performance of the method.
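The abstract does not detail the grasp parameterization, so the following is only a hedged sketch: it assumes a hypothetical frame-invariant descriptor (the sorted pairwise distances between fingertip positions, unchanged by any rigid transformation of the hand/object pair) and a nearest-neighbour classifier trained on simulated grasps.

```python
import math
from itertools import combinations

def grasp_features(fingertips):
    """Frame-invariant grasp descriptor: sorted pairwise distances
    between fingertip positions (obtained from the hand's known
    kinematics), invariant to rigid motions of the hand/object pair."""
    return sorted(math.dist(p, q) for p, q in combinations(fingertips, 2))

class NearestGraspClassifier:
    """1-nearest-neighbour classifier over grasp descriptors; the
    training grasps would be generated purely in simulation."""
    def fit(self, grasps, labels):
        self.samples = [(grasp_features(g), y) for g, y in zip(grasps, labels)]
        return self

    def predict(self, fingertips):
        f = grasp_features(fingertips)
        return min(self.samples,
                   key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], f)))[1]
```

A translated or rotated copy of a training grasp maps to the same descriptor, which is the property a frame-invariant parameterization exploits.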
Learning Haptic-based Object Pose Estimation for In-hand Manipulation Control with Underactuated Robotic Hands
Unlike traditional robotic hands, underactuated compliant hands are
challenging to model due to inherent uncertainties. Consequently, pose
estimation of a grasped object is usually performed based on visual perception.
However, visual perception of the hand and object can be limited in occluded or
partly-occluded environments. In this paper, we aim to explore the use of
haptics, i.e., kinesthetic and tactile sensing, for pose estimation and in-hand
manipulation with underactuated hands. Such a haptic approach would mitigate
occluded settings where a line-of-sight is not always available. We put an
emphasis on identifying the feature state representation of the system that
does not include vision and can be obtained with simple and low-cost hardware.
For tactile sensing, therefore, we propose a low-cost and flexible sensor that
is mostly 3D printed along with the finger-tip and can provide implicit contact
information. Taking a two-finger underactuated hand as a test-case, we analyze
the contribution of kinesthetic and tactile features along with various
regression models to the accuracy of the predictions. Furthermore, we propose a
Model Predictive Control (MPC) approach which utilizes the pose estimation to
manipulate objects to desired states solely based on haptics. We have conducted
a series of experiments that validate the ability to estimate poses of various
objects with different geometries, stiffnesses, and textures, and show
manipulation to goals in the workspace with relatively high accuracy.
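The abstract does not give the MPC formulation; as a rough sketch of the control pattern only, the toy example below enumerates short action sequences through a transition model, scores them against a goal pose, and applies the first action of the best sequence. The 1-D `toy_model` and discrete action set are placeholder assumptions; in the paper, the pose feedback would come from the learned haptic estimator and the transition model from data.

```python
from itertools import product

def mpc_step(pose, goal, simulate, horizon=4, actions=(-1.0, 0.0, 1.0)):
    """One MPC step: roll every short action sequence through the
    transition model, score by squared distance to the goal, and
    return the first action of the best sequence."""
    def cost(seq):
        p, c = pose, 0.0
        for a in seq:
            p = simulate(p, a)
            c += (p - goal) ** 2
        return c
    return min(product(actions, repeat=horizon), key=cost)[0]

def toy_model(pose, action):
    # Placeholder 1-D transition model; stands in for the learned
    # hand/object dynamics, which the abstract does not specify.
    return pose + 0.1 * action

pose, goal = 0.0, 1.0
for _ in range(15):                      # receding-horizon loop
    pose = toy_model(pose, mpc_step(pose, goal, toy_model))
```

Re-planning at every step is what lets this scheme absorb the model uncertainty inherent to underactuated hands.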
Recognition and Estimation of Human Finger Pointing with an RGB Camera for Robot Directive
In communication between humans, gestures are often preferred or
complementary to verbal expression since the former offers better spatial
referral. Finger pointing gesture conveys vital information regarding some
point of interest in the environment. In human-robot interaction, a user can
easily direct a robot to a target location, for example, in search and rescue
or factory assistance. State-of-the-art approaches for visual pointing
estimation often rely on depth cameras, are limited to indoor environments, and
provide only discrete predictions among a limited set of targets. In this paper, we explore
the learning of models for robots to understand pointing directives in various
indoor and outdoor environments solely based on a single RGB camera. A novel
framework is proposed which includes a designated model termed PointingNet.
PointingNet recognizes the occurrence of pointing and then approximates the
position and direction of the index finger. The model relies on a novel
segmentation model for masking any lifted arm. While state-of-the-art human
pose estimation models yield a poor pointing-angle error of 28°, PointingNet
exhibits a mean error of less than 2°. With the pointing information, the
target is computed, followed by planning and motion of the robot. The
framework is evaluated on two robotic systems, yielding accurate target
reaching.
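The abstract states that the target is computed from the estimated finger position and direction. One plausible way to do this, assumed here for illustration rather than taken from the paper, is to intersect the pointing ray with a ground plane:

```python
def pointing_target(finger_pos, finger_dir, ground_z=0.0):
    """Intersect the index-finger ray with the horizontal plane
    z = ground_z; returns None if the ray never descends to it."""
    px, py, pz = finger_pos
    dx, dy, dz = finger_dir
    if dz >= 0.0:                     # ray points level or upward
        return None
    t = (ground_z - pz) / dz          # ray parameter at the plane
    return (px + t * dx, py + t * dy, ground_z)
```

With this geometry, the target error grows roughly linearly with the distance along the ray for a fixed angular error, which is why the pointing-angle accuracy matters.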
AllSight: A Low-Cost and High-Resolution Round Tactile Sensor with Zero-Shot Learning Capability
Tactile sensing is a necessary capability for a robotic hand to perform fine
manipulations and interact with the environment. Optical sensors are a
promising solution for high-resolution contact estimation. Nevertheless, they
are usually not easy to fabricate and require individual calibration in order
to achieve sufficient accuracy. In this letter, we propose AllSight, an optical
tactile sensor with a round 3D structure potentially designed for robotic
in-hand manipulation tasks. AllSight is mostly 3D printed, making it low-cost,
modular, and durable; it is the size of a human thumb yet offers a large
contact surface. We show the ability of AllSight to learn and estimate a full contact
state, i.e., contact position, forces and torsion. In addition, an experimental
benchmark across various configurations of illumination and contact elastomers
is provided. Furthermore, the robust design of AllSight provides it with a
unique zero-shot capability such that a practitioner can fabricate the
open-source design and have a ready-to-use state estimation model. A set of
experiments demonstrates the accurate state estimation performance of AllSight.
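The actual AllSight estimator is a learned model that regresses position, forces, and torsion from the camera image; purely as a toy illustration of the input/output relation (not the paper's method), even an intensity-weighted centroid recovers a contact position from a tactile intensity map:

```python
def contact_position(image):
    """Toy contact-position estimate on a 2-D tactile intensity map:
    the intensity-weighted centroid of the deformation pattern."""
    total = wx = wy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            wx += v * x
            wy += v * y
    return (wx / total, wy / total)
```

A learned model replaces this heuristic with a mapping that also handles the sensor's curved 3D surface and full contact state.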
Method to Develop Legs for Underwater Robots: From Multibody Dynamics with Experimental Data to Mechatronic Implementation
Exploration of the seabed is complex, and a robotic system must account for several parameters to achieve tasks in this environment, such as soil characteristics, seabed gait, and the hydrodynamic forces of this extreme setting. This paper presents a gait simulation of a quadrupedal robot on a typical terrigenous-sediment seabed, considering the mechanical properties of the soil (stiffness, damping, and friction coefficients) referenced from the specialized literature and applied in a computational multibody model, together with extensive experimental data from a specific underwater environment chosen to avoid hydrodynamic effects. The position and torque requirements of the robot’s active joints are presented for a 5R leg mechanism following the natural gait pattern of a dog on the ground. These simulation results inform the design of a testbed, with a leg prototype and its respective hardware and software architecture, and a subsequent comparison with the real results.
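The 5R leg mentioned above is a planar five-bar linkage; as a hedged sketch (link lengths and geometry are illustrative placeholders, not the paper's dimensions), its inverse kinematics reduces to a law-of-cosines computation per actuated joint:

```python
import math

def five_bar_ik(px, py, base=0.12, l1=0.10, l2=0.16):
    """Inverse kinematics of a planar 5R leg: given the foot point
    (px, py), return the two actuated joint angles. Base joints sit
    at (0, 0) and (base, 0); l1/l2 are proximal/distal link lengths."""
    def joint_angle(bx, elbow):
        dx, dy = px - bx, py
        r = math.hypot(dx, dy)            # base joint to foot distance
        # law of cosines at the actuated joint, clamped for safety
        c = (l1 ** 2 + r ** 2 - l2 ** 2) / (2 * l1 * r)
        return math.atan2(dy, dx) + elbow * math.acos(max(-1.0, min(1.0, c)))
    return joint_angle(0.0, +1.0), joint_angle(base, -1.0)  # elbows outward
```

Evaluating this along a recorded dog-gait foot trajectory yields the joint-position requirements; torques would come from the multibody dynamics.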
Control Strategy of an Underactuated Underwater Drone-Shape Robot for Grasping Tasks
In underwater environments, ensuring people’s safety is complicated, with potentially life-threatening outcomes, especially when divers have to work at greater depths. To improve the available solutions for working with robots in this kind of environment, we propose the validation of a control strategy for robots grasping objects from the seabed. The proposed control strategy is based on acceleration feedback in the model of the system. Using this model, the reference values for position, velocity and acceleration are estimated, and the position error signal is then computed. The validation was carried out using three different objects: a ball, a bottle, and a plant. The experiment consisted of using this control strategy to grasp those objects, which the robot carried for a moment to validate the stabilisation control and reference-following control in terms of angle and depth. The robot was teleoperated by a pilot from outside the pool, guided using a camera and sonar. As an advantage of this control strategy, the model upon which the robot is based is decoupled, allowing the robot to be controlled in each uncoupled plane; this is the main finding of these tests. It demonstrates that the robot can be controlled by a strategy based on a decoupled model that takes into account the robot’s hydrodynamic parameters.
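The abstract describes estimating position, velocity, and acceleration references and computing a position error on a decoupled model. A per-axis sketch of such an acceleration-feedback law (the gains and exact form are assumptions for illustration, not taken from the paper) is:

```python
def accel_feedback_control(ref, state, kp=4.0, kd=2.0):
    """Per-axis control: reference acceleration as feed-forward plus
    PD feedback on position/velocity errors. With a decoupled model,
    each plane (e.g. depth, yaw) runs its own copy with its own gains."""
    pos_ref, vel_ref, acc_ref = ref
    pos, vel = state
    return acc_ref + kd * (vel_ref - vel) + kp * (pos_ref - pos)
```

On a decoupled double-integrator axis this drives the position error to zero for any positive gains, which is what makes per-plane tuning practical.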